On the Embedding Problem for Discrete-Time Markov Chains
Authors
Abstract
Similar Resources
On the Embedding Problem for Three-state Markov Chains
The present paper investigates the embedding problem for time-homogeneous Markov chains. A discrete-time Markov chain with time unit 1 is embeddable if there exists a compatible Markov chain with respect to the time unit 1/m (with m ∈ N, m ≥ 2). An embeddable Markov chain has a transition matrix for which there exists an m-th root that is itself a probability matrix. The present paper examines the embedding ...
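As a concrete illustration of this criterion (a minimal sketch of my own, not taken from the paper), the snippet below numerically tests whether the principal square root of a transition matrix is again a probability matrix, i.e. whether the chain is embeddable with respect to time unit 1/2. The example matrix and the helper name are assumptions, and a chain may still be embeddable through a non-principal root even when this test fails.

```python
# Sketch (assumed example, not from the paper): test whether the principal
# square root of a transition matrix P is again a probability matrix.
# If it is, the chain with transition matrix P is embeddable w.r.t. time unit 1/2.
import numpy as np
from scipy.linalg import sqrtm

def has_stochastic_square_root(P, tol=1e-9):
    """Return (flag, R), where R is the principal square root of P and flag says
    whether R is (numerically) row-stochastic and entrywise non-negative."""
    R = sqrtm(P)
    R = np.real_if_close(R)
    if np.iscomplexobj(R):           # genuinely complex root: not a probability matrix
        return False, R
    rows_ok = np.allclose(R.sum(axis=1), 1.0, atol=tol)
    nonneg = (R >= -tol).all()
    return bool(rows_ok and nonneg), R

# Build an embeddable example: P = Q @ Q for a stochastic, positive-definite Q,
# so the principal square root of P is exactly Q.
Q = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
P = Q @ Q
ok, R = has_stochastic_square_root(P)
print(ok)   # True -> an m-th root (m = 2) that is a probability matrix exists
# Caveat: this only checks the principal root; other square roots of P may be
# stochastic even when the principal one is not.
```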
Hierarchical Counterexamples for Discrete-Time Markov Chains
This paper introduces a novel counterexample generation approach for the verification of discrete-time Markov chains (DTMCs) with two main advantages: (1) We generate abstract counterexamples which can be refined in a hierarchical manner. (2) We aim at minimizing the number of states involved in the counterexamples, and compute a critical subsystem of the DTMC whose paths form a counterexample....
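To make the notion of a counterexample concrete (this is a plain path-enumeration baseline of my own, not the hierarchical abstraction-refinement method described in the abstract), the sketch below collects the highest-probability paths from an initial state to a target state of a DTMC until their total mass exceeds a violated bound; the states touched by those paths induce a (not necessarily minimal) subsystem witnessing the violation.

```python
# Sketch (my own baseline, not the paper's algorithm): enumerate paths of a DTMC
# from `init` to `target` in decreasing order of probability and stop once their
# accumulated mass exceeds the bound `lam`, refuting the property
# "the probability of eventually reaching target is at most lam".
import heapq
import numpy as np

def counterexample_paths(P, init, target, lam, max_len=25):
    heap = [(-1.0, (init,))]          # max-heap on path probability (negated keys)
    mass, witness = 0.0, []
    while heap and mass <= lam:
        neg_p, path = heapq.heappop(heap)
        p, last = -neg_p, path[-1]
        if last == target:            # path reaches the target: add it to the counterexample
            mass += p
            witness.append((path, p))
            continue
        if len(path) >= max_len:      # crude cutoff to keep the enumeration finite
            continue
        for nxt, q in enumerate(P[last]):
            if q > 0.0:
                heapq.heappush(heap, (-(p * q), path + (nxt,)))
    return witness, mass

P = np.array([[0.0, 0.6, 0.4],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])       # state 2 is the absorbing target
witness, mass = counterexample_paths(P, init=0, target=2, lam=0.9)
print(mass)                                  # 1.0 > 0.9: the bound is violated
print([list(path) for path, _ in witness])   # [[0, 1, 2], [0, 2]]
```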
Lecture 11: Discrete Time Markov Chains
where {Z_n : n ∈ N} is an i.i.d. sequence, independent of the initial state X_0. If X_n ∈ E for all n ∈ N_0, then E is called the state space of the process X. We consider a countable state space, and if X_n = i ∈ E, then we say that the process X is in state i at time n. For a countable set E, a stochastic process {X_n ∈ E, n ∈ N_0} is called a discrete time Markov chain (DTMC) if for all positive integers n ∈ ...
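For illustration only (a minimal sketch, assuming the state space is {0, ..., |E|-1} and the chain is given by a row-stochastic matrix P), the following simulation makes the recursion explicit: the next state is drawn using only the current state and fresh independent randomness.

```python
# Sketch: simulate a DTMC, drawing X_{n+1} from the row of P indexed by the
# current state X_n only -- the Markov property in executable form.
import numpy as np

def simulate_dtmc(P, x0, n_steps, seed=0):
    rng = np.random.default_rng(seed)   # plays the role of the i.i.d. sequence Z_n
    path = [x0]
    for _ in range(n_steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(simulate_dtmc(P, x0=0, n_steps=10))
```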
Discrete Time Markov Chains: Ergodicity Theory
Lecture 8: Discrete Time Markov Chains: Ergodicity Theory. Announcements: 1. We handed out HW2 solutions and your homeworks in Friday's recitation. I am handing out a few extras today. Please make sure you get these! 2. Remember that I now have office hours both Wednesday at 3 p.m. and Thursday at 4 p.m. Please show up and ask questions about the lecture notes, not just the homework! No one cam...
Discrete Time Markov Chains 1 Examples
Example 1.1 (Gambler's Ruin Problem). A gambler has $100. He bets $1 each game and wins with probability 1/2. He stops playing when he goes broke or reaches $1000. Natural questions include: what is the probability that he goes broke? On average, how many games are played? This problem is a special case of the so-called Gambler's Ruin problem, which can be modelled using a Markov chain as follows. We will b...
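As a quick sanity check on these questions (my own sketch, using the standard closed-form answers for the fair game rather than anything from the excerpt): starting at i dollars with absorbing barriers at 0 and N, the ruin probability is 1 - i/N and the expected number of games is i(N - i).

```python
# Sketch: classical closed forms for the fair (p = 1/2) gambler's ruin on {0, ..., N}.
def gambler_ruin_fair(i, N):
    ruin_prob = 1 - i / N            # probability of hitting 0 before N, starting from i
    expected_games = i * (N - i)     # expected number of bets until absorption
    return ruin_prob, expected_games

print(gambler_ruin_fair(100, 1000))  # (0.9, 90000): ruin is likely, and the game is long
```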
Journal
Journal title: Journal of Applied Probability
Year: 2013
ISSN: 0021-9002, 1475-6072
DOI: 10.1239/jap/1389370090